
    See and Read: Detecting Depression Symptoms in Higher Education Students Using Multimodal Social Media Data

    Full text link
    Mental disorders such as depression and anxiety have been increasing at alarming rates worldwide. Notably, major depressive disorder has become a common problem among higher education students, aggravated, and perhaps even occasioned, by the academic pressures they face. While the reasons for this alarming situation remain unclear (although widely investigated), students already facing this problem must receive treatment, and the first step is screening for symptoms. Traditionally, screening relies on clinical consultations or questionnaires. However, data shared on social media are a ubiquitous source that can be used to detect depression symptoms even when students cannot afford or do not seek professional care. Previous works have relied on social media data to detect depression in the general population, usually focusing on either posted images or texts, or relying on metadata. In this work, we focus on detecting the severity of depression symptoms in higher education students by comparing deep learning with feature-engineering models induced from both the pictures and their captions posted on Instagram. The experimental results show that students with a BDI score greater than or equal to 20 can be detected with a recall of 0.92 and a precision of 0.69 in the best case, reached by a fusion model. Our findings show the potential of large-scale depression screening, which could shed light on at-risk students. Comment: this article was accepted (15 November 2019) and will appear in the proceedings of ICWSM 202
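
    As a rough illustration of the multimodal setup described above, the sketch below fuses precomputed image embeddings with TF-IDF caption features and trains a linear classifier to flag BDI >= 20. It is a minimal sketch on synthetic placeholder data; the feature extractors, the fusion strategy, and all names are assumptions, not the authors' pipeline.

```python
# Hypothetical late-fusion sketch: combine per-student image embeddings and
# caption TF-IDF features to flag BDI >= 20. Illustrative only.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Placeholder data: 200 students, one averaged image embedding (stand-in for
# CNN features) and one concatenated caption string per student.
img_emb = rng.normal(size=(200, 512))
captions = ["feeling tired and alone" if y else "great day with friends"
            for y in (rng.random(200) < 0.3)]
bdi = rng.integers(0, 40, size=200)
labels = (bdi >= 20).astype(int)          # screening threshold from the abstract

# Text branch: TF-IDF over captions.
tfidf = TfidfVectorizer(max_features=300)
txt_feat = tfidf.fit_transform(captions)

# Fusion: simply concatenate the image and text views.
fused = hstack([csr_matrix(img_emb), txt_feat])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred, zero_division=0))
print("recall:", recall_score(y_te, pred, zero_division=0))
```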

    Distance perception in a natural outdoor setting: is there a developmental trend to overconstancy?

    Get PDF
    The main purpose of the present study was to investigate whether, in a natural environment using very large physical distances, there is a trend to overconstancy in distance estimates during development. One hundred and twenty-nine children aged 5 to 13 years and twenty-one adults (as a control group) participated as observers. The observers' task was to bisect egocentric distances, ranging from 1.0 to 296.0 m, presented in a large open field. The analyses focused on two parameters, constant errors and variable errors, which measure accuracy and precision, respectively. A third analysis focused on the developmental pattern of shifts in constancy as a function of age and range of distances. The constant-error analysis showed that two parameters are relevant for accuracy: age and range of distances. For short distances, there are three developmental stages: 5-7 years, when children give unstable responses; 7-11 years, with underconstancy; and 13 years to adulthood, when accuracy is reached. For large distances, there are two developmental stages: 5-11 years, with severe underconstancy, and beyond this age, with mild underconstancy. The variable-error analyses indicate that precision is reached by 7-year-old children, independently of the range of distances. The constancy analyses indicated a shift from constancy (or slight overconstancy) to underconstancy as a function of physical distance for all age groups. The age difference appears in the magnitude of the underconstancy at larger distances, where adults presented lower levels of underconstancy than children. The present data were interpreted as reflecting a developmental change in cognitive processing rather than changes in visual space perception.
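
    The two error measures named in the abstract can be made concrete with a short sketch. Assuming that a correct bisection places the marker at half the target distance, the constant error is the mean signed deviation from that midpoint (accuracy) and the variable error is the spread of the responses (precision); the sample numbers below are invented for illustration.

```python
# Minimal sketch of the constant-error and variable-error measures for a
# bisection task, under the stated assumptions. Sample responses are invented.
import numpy as np

def bisection_errors(target_distance_m, responses_m):
    """Constant error = mean signed deviation from the true midpoint (accuracy).
    Variable error = standard deviation of the responses (precision)."""
    responses = np.asarray(responses_m, dtype=float)
    midpoint = target_distance_m / 2.0
    constant_error = responses.mean() - midpoint
    variable_error = responses.std(ddof=1)
    return constant_error, variable_error

# Example: a 296 m egocentric distance bisected on three trials.
ce, ve = bisection_errors(296.0, [120.0, 131.0, 126.0])
print(f"constant error: {ce:+.1f} m (sign indicates the direction of the bias)")
print(f"variable error: {ve:.1f} m")
```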

    INTERAÇÕES ENTRE SISTEMAS DE REFERÊNCIA ALOCÊNTRICOS E EGOCÊNTRICOS: EVIDÊNCIAS DOS ESTUDOS COM DIREÇÃO PERCEBIDA

    No full text
    Frames of reference are defined as a locus or set of loci relative to which spatial locations are determined. Egocentric reference frames define spatial locations relative to the observer, whereas in allocentric reference frames locations are determined relative to loci external to the observer. We analyze several lines of evidence for the use of these reference frames by the human visual system, in both visual perception tasks and navigation tasks. Hypotheses about the dissociation of the visual system according to these reference frames are supported by results from neuropsychological studies. Taken together, the experimental evidence indicates an effective interaction between encodings in egocentric and allocentric reference frames, such that the visual and visuomotor systems transform the coordinates of one frame into those of the other, adapting the information to the demands of the different tasks.
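
    A small sketch may help make the coordinate transformation between the two frames concrete: an allocentric (world-fixed) location is re-expressed in observer-centered coordinates given the observer's position and heading. The 2-D geometry and all names below are illustrative assumptions, not a model from the reviewed studies.

```python
# Illustrative 2-D conversion between the two frames discussed above.
import math

def allocentric_to_egocentric(target_xy, observer_xy, heading_rad):
    """Re-express a world (allocentric) point in observer-centered coordinates.
    heading_rad is the facing direction, counterclockwise from the world +x axis.
    Returns (forward, leftward): meters ahead of and to the left of the observer."""
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    forward = dx * cos_h + dy * sin_h      # projection onto the facing direction
    leftward = -dx * sin_h + dy * cos_h    # projection onto the observer's left
    return forward, leftward

# Observer at the origin facing "north" (+y): a landmark 3 m east and 4 m north
# is about 4 m ahead and 3 m to the right (leftward ~ -3) in egocentric terms.
print(allocentric_to_egocentric((3.0, 4.0), (0.0, 0.0), math.pi / 2))
```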

    Screening for Depressed Individuals by Using Multimodal Social Media Data

    No full text
    Depression has increased at alarming rates in the worldwide population. One alternative for finding depressed individuals is to use social media data to train machine learning (ML) models that identify depressed cases automatically. Previous works have already relied on ML to solve this task with reasonably good F-measure scores. Still, several limitations prevent these models from reaching their full potential. In this work, we show that the task of identifying depression through social media is better modeled as a Multiple Instance Learning (MIL) problem that can exploit the temporal dependencies between posts.
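
    The MIL framing can be illustrated with a minimal baseline: each user is a bag of post-level feature vectors, only the bag carries a label, training propagates bag labels to instances, and prediction pools post scores with a max over the bag. This is a toy sketch on synthetic data and does not reproduce the authors' model or its temporal component.

```python
# Toy Multiple Instance Learning baseline: bags of posts, bag-level labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic bags: 40 users, each with a variable number of 16-dim post features.
bags, bag_labels = [], []
for user in range(40):
    label = user % 2                      # alternate negative / positive users
    n_posts = int(rng.integers(5, 15))
    posts = rng.normal(size=(n_posts, 16))
    if label:                             # shift a few posts of positive users
        posts[: max(1, n_posts // 4)] += 1.5
    bags.append(posts)
    bag_labels.append(label)

# Single-instance baseline: every post inherits its user's label for training.
X = np.vstack(bags)
y = np.concatenate([[l] * len(b) for l, b in zip(bag_labels, bags)])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Bag-level prediction: a user is flagged if any post looks depressive (max pooling).
bag_scores = [clf.predict_proba(b)[:, 1].max() for b in bags]
bag_pred = [int(s >= 0.5) for s in bag_scores]
accuracy = np.mean([p == l for p, l in zip(bag_pred, bag_labels)])
print("bag-level accuracy on the toy data:", accuracy)
```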

    Visão e percepção visual

    No full text

    Visual angle as determinant factor for relative distance perception

    No full text
    The visual angle is defined as the angle between the line of sight to the midpoint of a relative distance and the relative distance itself. In one experiment, we examined the functional role of the visual angle in relative distance perception using two different layouts, each composed of 14 stakes, one with its center 23 m away from the observation point and the other 36 m away. Verbal reports of relative distance were grouped into 10 categories of visual angles. The results indicate that the visual angle is a determinant factor for perceived relative distance, as observed in the absence of perceptual errors for distances with visual angles equal to or larger than 70 degrees, which could be attributed to a combination of sources of visual information. Another finding showed a possible intrusion of non-perceptual factors (observers' tendencies), leading to compressed estimates for relative distances with visual angles smaller than 70 degrees.
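
    The visual-angle definition given above can be computed directly: it is the angle between the vector from the observer to the midpoint of a stake pair and the vector joining the two stakes. The coordinates in the sketch below are invented for illustration.

```python
# Sketch of the visual-angle definition: angle between the line of sight to the
# midpoint of a stake pair and the segment joining the stakes.
import numpy as np

def visual_angle_deg(observer, stake_a, stake_b):
    observer, stake_a, stake_b = map(np.asarray, (observer, stake_a, stake_b))
    midpoint = (stake_a + stake_b) / 2.0
    line_of_sight = midpoint - observer        # observer -> midpoint of the pair
    segment = stake_b - stake_a                # the relative distance itself
    cos_angle = np.dot(line_of_sight, segment) / (
        np.linalg.norm(line_of_sight) * np.linalg.norm(segment))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Two stakes straddling a point 23 m ahead of the observer, 2 m apart frontally:
# the segment is perpendicular to the line of sight, so the angle is 90 degrees.
print(visual_angle_deg((0.0, 0.0), (-1.0, 23.0), (1.0, 23.0)))
# A pair laid out in depth along the line of sight gives an angle near 0 degrees.
print(visual_angle_deg((0.0, 0.0), (0.0, 22.0), (0.0, 24.0)))
```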

    One-dimensional and multi-dimensional studies of the exocentric distance estimates in frontoparallel plane, virtual space, and outdoor open field

    No full text
    The aim of this study is twofold: on the one hand, to determine how visual space, as assessed by exocentric distance estimates, is related to physical space; on the other hand, to determine the structure of visual space as assessed by those estimates. Visual space was measured in three environments: (a) points located in a 2-D frontoparallel plane, covering a range of distances of 20 cm; (b) stakes placed in a 3-D virtual space (range = 330 mm); and (c) stakes in a 3-D outdoor open field (range = 45 m). Observers made matching judgments of the distances between all possible pairs of 16 stimuli (arranged in a regular 4 × 4 square matrix). Two parameters from Stevens' power law informed us about the distortion of visual space: its exponent and its coefficient of determination (R²). The results showed a ranking of the magnitude of the distortions found in each experimental environment and also provided information about the efficacy of the available visual cues to spatial layout. Furthermore, our data agree with previous findings showing systematic perceptual errors, such that the farther the stimuli, the larger the distortion of the area subtended by the perceived distances between stimuli. Additionally, we measured the magnitude of the distortion of visual space relative to physical space with a parameter of the multidimensional scaling analyses, the RMSE. From these results, the magnitude of such distortions can be ranked, and the utility or efficacy of the available visual cues informing about the spatial layout can also be inferred.
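
    The two Stevens' power-law parameters used in the study, the exponent and R², can be obtained from a log-log linear fit between physical and judged distances, as in the sketch below; the sample judgments are invented.

```python
# Minimal sketch: fit Stevens' power law (judged = k * physical**n) by linear
# regression in log-log coordinates and report the exponent and R^2.
import numpy as np
from scipy.stats import linregress

physical_m = np.array([2.0, 5.0, 10.0, 20.0, 30.0, 45.0])
judged_m = np.array([2.1, 4.7, 8.8, 16.5, 23.9, 33.0])   # compressed far distances

fit = linregress(np.log(physical_m), np.log(judged_m))
exponent = fit.slope
r_squared = fit.rvalue ** 2
print(f"exponent n = {exponent:.2f}  (n < 1 indicates compression of far distances)")
print(f"R^2 = {r_squared:.3f}")
```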